    Information recovery from rank-order encoded images

    The time to detection of a visual stimulus by the primate eye is recorded at 100–150 ms. This near-instantaneous recognition occurs in spite of the considerable processing required by the several stages of the visual pathway to recognise and react to a visual scene. How this is achieved is still a matter of speculation. Rank-order codes have been proposed as a means of encoding by the primate eye in the rapid transmission of the initial burst of information from the sensory neurons to the brain. We study the efficiency of rank-order codes in encoding perceptually important information in an image. VanRullen and Thorpe built a model of the ganglion cell layers of the retina to simulate and study the viability of rank-order as a means of encoding by retinal neurons. We validate their model and quantify the information retrieved from rank-order encoded images in terms of the visually important information recovered. Towards this goal, we apply the 'perceptual information preservation algorithm' proposed by Petrovic and Xydeas, after slight modification. We observe a low information recovery due to losses suffered during the rank-order encoding and decoding processes. We propose to minimise these losses to recover maximum information in minimum time from rank-order encoded images. We first maximise information recovery by using the pseudo-inverse of the filter-bank matrix to minimise losses during rank-order decoding. We then apply the biological principle of lateral inhibition to minimise losses during rank-order encoding. In doing so, we propose the Filter-overlap Correction algorithm. To test the performance of rank-order codes in a biologically realistic model, we design and simulate a model of the foveal-pit ganglion cells of the retina, keeping close to biological parameters. We use this as a rank-order encoder and analyse its performance relative to VanRullen and Thorpe's retinal model.
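
    As a rough illustration of the pipeline described above, the sketch below builds a small Difference-of-Gaussians filter bank, ranks the absolute responses to obtain a rank-order code, and reconstructs the image with a simple 1/rank weighting. The filter scales, the 1:1.6 centre/surround ratio, and the rank-weighting function are illustrative assumptions, not the parameters of VanRullen and Thorpe's model or of this work.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def dog_response(image, sigma):
            # Difference-of-Gaussians band-pass response at one scale
            # (the 1:1.6 centre/surround ratio is an assumption)
            return gaussian_filter(image, sigma) - gaussian_filter(image, 1.6 * sigma)

        def rank_order_encode(image, sigmas=(1, 2, 4)):
            # The code is the ORDER in which units fire: strongest response first
            responses = np.stack([dog_response(image, s) for s in sigmas])
            order = np.argsort(-np.abs(responses), axis=None)
            coords = np.unravel_index(order, responses.shape)  # (scale, y, x) per rank
            return list(zip(*coords)), responses[coords]

        def rank_order_decode(coords, values, shape, sigmas=(1, 2, 4), n_spikes=1000):
            # Naive matched-filter reconstruction with an assumed 1/rank weighting;
            # the losses this incurs are what pseudo-inverse decoding addresses
            recon = np.zeros(shape)
            for rank, ((s, y, x), v) in enumerate(zip(coords[:n_spikes], values[:n_spikes])):
                impulse = np.zeros(shape)
                impulse[y, x] = np.sign(v) / (rank + 1)
                recon += dog_response(impulse, sigmas[s])
            return recon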

    Information recovery from rank-order encoded images

    The work described in this paper is inspired by SpikeNET, a system developed to test the feasibility of using rank-order codes in modelling large-scale networks of asynchronously spiking neurons. The rank-order code theory proposed by Thorpe concerns the encoding of information by a population of spiking neurons in the primate visual system. The theory proposes using the order of firing across a network of asynchronously firing spiking neurons as a neural code for information transmission. In this paper we aim to measure the perceptual similarity between the image input to a model retina, based on that originally designed and developed by VanRullen and Thorpe, and an image reconstructed from the rank-order encoding of the input image. We use an objective metric originally proposed by Petrovic to estimate perceptual edge preservation in image fusion which, after minor modifications, is very well suited to our purpose. The results show that typically 75% of the edge information of the input stimulus is retained in the reconstructed image, and we show how the available information increases with successive spikes in the rank-order code.
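
    The sketch below gives a hedged approximation of the kind of objective edge-preservation measure described: Sobel gradients of the input and reconstructed images are compared in strength and orientation, and the per-pixel agreement is weighted by the reference edge strength. The exact weighting and sigmoid constants of Petrovic's metric are not reproduced here; this is a simplified stand-in.

        import numpy as np
        from scipy.ndimage import sobel

        def edge_preservation(reference, reconstructed, eps=1e-8):
            # Sobel gradients of both images
            gx_r, gy_r = sobel(reference, 0), sobel(reference, 1)
            gx_t, gy_t = sobel(reconstructed, 0), sobel(reconstructed, 1)
            mag_r, mag_t = np.hypot(gx_r, gy_r), np.hypot(gx_t, gy_t)
            ang_r, ang_t = np.arctan2(gy_r, gx_r), np.arctan2(gy_t, gx_t)
            # Per-pixel strength and orientation agreement, both in [0, 1]
            strength = np.minimum(mag_r, mag_t) / (np.maximum(mag_r, mag_t) + eps)
            orientation = 1.0 - np.abs(np.angle(np.exp(1j * (ang_r - ang_t)))) / np.pi
            # Weight the agreement by reference edge strength, as in fusion metrics
            q = strength * orientation
            return float((q * mag_r).sum() / (mag_r.sum() + eps))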

    Fine-grained or coarse-grained? Strategies for implementing parallel genetic algorithms in a programmable neuromorphic platform

    The Genetic Algorithm (GA) is one of the most popular heuristic-based optimization methods and has attracted engineers and scientists for many years. With the advancement of multi- and many-core technologies, GAs are transformed into more powerful tools by parallelising their core processes. This paper describes a feasibility study of implementing parallel GAs (pGAs) on SpiNNaker. As a many-core neuromorphic platform, SpiNNaker offers the possibility of scaling up a parallelised algorithm, such as a pGA, whilst offering low power consumption in its processing and communication overhead. However, due to its small-packet distribution mechanism and constrained processing resources, parallelising the processes of a GA on SpiNNaker is challenging. In this paper we show how a pGA can be implemented on SpiNNaker and analyse its performance. Due to the inherently numerous parameters and classifications of pGAs, we evaluate only the most common aspects of a pGA and use some artificial benchmarking test functions. The experiments produced some promising results that may lead to further developments of massively parallel GAs on SpiNNaker.
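
    A coarse-grained (island-model) pGA is the variant that maps most naturally onto a many-core machine: one subpopulation per core, with occasional migration standing in for inter-core packets. The sketch below is a minimal serial illustration of that structure; the sphere benchmark, population sizes, and ring-migration scheme are assumptions for illustration, not the paper's configuration.

        import random

        def sphere(x):
            # Common artificial benchmark: minimise the sum of squares
            return sum(v * v for v in x)

        def evolve(pop, mutation=0.1):
            # One generation: truncation selection + Gaussian mutation (no crossover)
            pop = sorted(pop, key=sphere)
            parents = pop[:len(pop) // 2]
            children = [[v + random.gauss(0, mutation) for v in random.choice(parents)]
                        for _ in range(len(pop) - len(parents))]
            return parents + children

        def island_ga(n_islands=4, pop_size=20, dim=8, generations=100, migrate_every=10):
            islands = [[[random.uniform(-5, 5) for _ in range(dim)]
                        for _ in range(pop_size)] for _ in range(n_islands)]
            for g in range(generations):
                islands = [evolve(pop) for pop in islands]
                if g % migrate_every == 0:
                    # Ring migration: each island's best replaces one resident of the
                    # next island (this is the inter-core traffic on SpiNNaker)
                    bests = [min(pop, key=sphere) for pop in islands]
                    for i, pop in enumerate(islands):
                        pop[-1] = bests[(i - 1) % n_islands]
            return min((min(pop, key=sphere) for pop in islands), key=sphere)

        print(sphere(island_ga()))  # should approach 0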

    Stochastic rounding and reduced-precision fixed-point arithmetic for solving neural ordinary differential equations

    Although double-precision floating-point arithmetic currently dominates high-performance computing, there is increasing interest in smaller and simpler arithmetic types. The main reasons are potential improvements in energy efficiency and in memory footprint and bandwidth. However, simply switching to lower-precision types typically results in increased numerical errors. We investigate approaches to improving the accuracy of reduced-precision fixed-point arithmetic types, using examples in an important domain for numerical computation in neuroscience: the solution of Ordinary Differential Equations (ODEs). The Izhikevich neuron model is used to demonstrate that rounding has an important role in producing accurate spike timings from explicit ODE solution algorithms. In particular, fixed-point arithmetic with stochastic rounding consistently results in smaller errors compared to single-precision floating-point and fixed-point arithmetic with round-to-nearest, across a range of neuron behaviours and ODE solvers. A computationally much cheaper alternative is also investigated, inspired by the concept of dither, a widely understood mechanism for providing resolution below the least significant bit (LSB) in digital signal processing. These results will have implications for the solution of ODEs in other subject areas, and should also be directly relevant to the huge range of practical problems that are represented by Partial Differential Equations (PDEs).
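
    The sketch below illustrates the core technique: stochastic rounding onto a fixed-point grid, applied to each state update of a forward-Euler Izhikevich step. The s16.15 format mirrors SpiNNaker's accum type and the neuron parameters are the standard regular-spiking values, but the solver details here are a simplified illustration rather than the paper's implementation.

        import random

        FRAC_BITS = 15  # s16.15 fixed point: resolution 2**-15

        def stochastic_round(x, frac_bits=FRAC_BITS):
            # Round to one of the two neighbouring grid points, with probability
            # proportional to proximity; unbiased on average, unlike round-to-nearest
            scaled = x * (1 << frac_bits)
            low = int(scaled // 1)
            return (low + (random.random() < scaled - low)) / (1 << frac_bits)

        def izhikevich_step(v, u, i_in, dt=1.0, a=0.02, b=0.2):
            # One explicit-Euler step with each state update stochastically rounded
            dv = 0.04 * v * v + 5.0 * v + 140.0 - u + i_in
            du = a * (b * v - u)
            return stochastic_round(v + dt * dv), stochastic_round(u + dt * du)

        v, u = -65.0, -13.0
        for t in range(1000):
            v, u = izhikevich_step(v, u, i_in=10.0)
            if v >= 30.0:            # spike: reset with c = -65, d = 8 (regular spiking)
                v, u = -65.0, u + 8.0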

    Editorial: asynchronous architecture

    Asynchronous design is enjoying a worldwide resurgence of interest following several decades in obscurity. Many of the early computers employed asynchronous design techniques, but since the mid-1970s almost all digital design has been based around the use of a central clock. The clock simplifies most aspects of design and offers methodologies which are straightforward and easy to automate. These benefits have helped digital engineers to take advantage of the ever-expanding resource at their disposal, while keeping design costs under control. Comprehensive CAD systems, used pervasively in industry and targeted specifically at synchronous design styles, are one way of achieving this.

    Synapse-centric mapping of cortical models to the SpiNNaker neuromorphic architecture

    While the adult human brain has approximately 8.8 × 10^10 neurons, this number is dwarfed by its 1 × 10^15 synapses. From the point of view of neuromorphic engineering, and neural simulation in general, this makes the simulation of these synapses a particularly complex problem. SpiNNaker is a digital neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real time. Current solutions for simulating spiking neural networks on SpiNNaker are heavily inspired by work on distributed high-performance computing. However, while SpiNNaker shares many characteristics with such distributed systems, its component nodes have much more limited resources and, as the system lacks global synchronization, the computation performed on each node must complete within a fixed time step. We first analyze the performance of the current SpiNNaker neural simulation software and identify several problems that occur when it is used to simulate networks of the type often used to model the cortex, which contain large numbers of sparsely connected synapses. We then present a new, more flexible approach for mapping the simulation of such networks to SpiNNaker which solves many of these problems. Finally, we analyze the performance of our new approach using both benchmarks designed to represent cortical connectivity and larger, functional cortical models. In a benchmark network where neurons receive input from 8000 STDP synapses, our new approach allows 4× more neurons to be simulated on each SpiNNaker core than has previously been possible. We also demonstrate that the largest plastic neural network previously simulated on neuromorphic hardware can be run in real time using our new approach: double the speed that was previously achieved. Additionally, this network contains two types of plastic synapse which previously had to be trained separately but, using our new approach, can be trained simultaneously.
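
    The sketch below contrasts the two partitioning schemes at the heart of this approach: neuron-centric mapping, where a core owns a slice of neurons together with all of their afferent synapses, and synapse-centric mapping, where several cores share the same neurons but each processes only a slice of the incoming synaptic rows and contributes a partial input current. The toy matrix and core counts are illustrative assumptions; the real tool chain implements this with routing tables and per-core synaptic matrices.

        import numpy as np

        rng = np.random.default_rng(1)
        n_pre, n_post, n_cores = 1000, 256, 4
        # Sparse random connectivity (10%), standing in for a cortical projection
        weights = rng.random((n_pre, n_post)) * (rng.random((n_pre, n_post)) < 0.1)

        # Neuron-centric: each core owns a slice of neurons AND all their synapses
        neuron_slices = np.array_split(np.arange(n_post), n_cores)
        neuron_centric = [weights[:, s] for s in neuron_slices]

        # Synapse-centric: cores share the SAME neurons but split the incoming rows;
        # each core computes a partial input current that a neuron core then sums
        row_slices = np.array_split(np.arange(n_pre), n_cores)

        def partial_current(core, spikes):
            # Input contributed by this core's slice of synapses in one time step
            rows = row_slices[core]
            return spikes[rows].astype(float) @ weights[rows]

        spikes = rng.random(n_pre) < 0.05   # one time step of presynaptic activity
        total_current = sum(partial_current(c, spikes) for c in range(n_cores))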

    Quantization Framework for Fast Spiking Neural Networks

    Compared with artificial neural networks (ANNs), spiking neural networks (SNNs) offer additional temporal dynamics at the cost of lower information transmission rates through the use of spikes. When using an ANN-to-SNN conversion technique there is a direct link between the activation bit precision of the artificial neurons and the time required by the spiking neurons to represent the same bit precision. This implicit link suggests that techniques used to reduce the activation bit precision of ANNs, such as quantization, can help shorten the inference latency of SNNs. However, carrying ANN quantization knowledge over to SNNs is not straightforward, as there are many fundamental differences between them. Here we propose a quantization framework for fast SNNs (QFFS) to overcome these difficulties, providing a method to build SNNs with reduced latency and reduced loss of accuracy relative to the baseline ANN model. In this framework, we promote the compatibility of ANN information quantization techniques with SNNs, and suppress "occasional noise" to minimize accuracy loss. The resulting SNNs overcome the accuracy degeneration observed previously in SNNs with a limited number of time steps and achieve an accuracy of 70.18% on ImageNet within 8 time steps. This is the first demonstration that SNNs built by ANN-to-SNN conversion can achieve a similar latency to SNNs built by direct training.
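
    The link between activation precision and latency can be made concrete with a small sketch: a ReLU activation quantized to b bits can be reproduced by an integrate-and-fire neuron firing over 2^b - 1 time steps. This illustrates only the general ANN-to-SNN correspondence the paper builds on; QFFS's occasional-noise suppression is not shown, and the half-threshold initialisation used here is a common conversion trick, not necessarily the paper's.

        import numpy as np

        def quantize_relu(x, bits=3, scale=1.0):
            # b-bit uniform quantization of a clipped ReLU activation
            levels = 2 ** bits - 1
            return np.round(np.clip(x, 0.0, scale) / scale * levels) / levels * scale

        def if_neuron(x, bits=3, scale=1.0):
            # Integrate-and-fire neuron driven by constant input for 2**b - 1 steps;
            # half-threshold initialisation turns floor into round-to-nearest
            levels = 2 ** bits - 1
            threshold = scale
            v, spikes = threshold / 2, 0
            for _ in range(levels):
                v += np.clip(x, 0.0, scale)
                if v >= threshold:
                    v -= threshold       # reset by subtraction preserves the residue
                    spikes += 1
            return spikes / levels * scale  # the firing rate recovers the quantized value

        x = 0.62
        print(quantize_relu(x), if_neuron(x))  # both print ~0.5714 (4/7)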